Topology optimization methods with gradient-free perimeter approximation


Similar articles

Topology optimization methods with gradient-free perimeter approximation

In this paper we introduce a family of smooth perimeter approximating functionals designed to be incorporated within topology optimization algorithms. The required mathematical properties, namely the Γ-convergence and the compactness of sequences of minimizers, are first established. Then we propose several methods for the solution of topology optimization problems with perimeter penalization s...
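The abstract does not spell out the approximating functionals, so the following is only a rough numerical illustration of the general idea: a Modica–Mortola-type smoothed perimeter of a density field on a uniform grid, which Γ-converges (up to a multiplicative constant) to the exact perimeter as the smoothing parameter tends to zero. The function name, grid, and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def smoothed_perimeter(rho, h, eps):
    """Modica-Mortola-type approximation of the perimeter of {rho ~ 1}.

    rho : 2D array of densities in [0, 1] on a uniform grid
    h   : grid spacing
    eps : smoothing parameter; the functional Gamma-converges to a
          multiple of the exact perimeter as eps -> 0
    """
    # central-difference gradient of the density field
    grad_y, grad_x = np.gradient(rho, h)
    grad_sq = grad_x**2 + grad_y**2

    # gradient term penalizes spread-out interfaces,
    # double-well term penalizes intermediate densities
    integrand = eps * grad_sq + (1.0 / eps) * rho**2 * (1.0 - rho)**2
    return np.sum(integrand) * h**2

# Example: a disc of radius 0.25 in the unit square with a diffuse boundary
n, h = 128, 1.0 / 128
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
signed = 0.25 - np.sqrt((x - 0.5)**2 + (y - 0.5)**2)
eps = 4 * h
rho = 0.5 * (1.0 + np.tanh(signed / eps))   # smoothed characteristic function
print(smoothed_perimeter(rho, h, eps))      # of the order of 2*pi*0.25, up to the Gamma-limit constant
```

In a topology optimization loop a quantity of this kind would typically be added to the objective with a penalty weight; because it is smooth in rho, its sensitivity is available in closed form.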


Regularized Perimeter for Topology Optimization

Abstract. The perimeter functional is known to pose serious difficulties when it has to be handled within a topology optimization procedure. In this paper, a regularized perimeter functional Perε, defined for 2d and 3d domains, is introduced. On one hand, the convergence of Perε to the exact perimeter when ε tends to zero is proved. On the other hand, the topological differentiability of Perε...
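The truncated abstract does not reproduce the definition of Perε, so the sketch below uses a generic stand-in: De Giorgi's heat-kernel characterization of the perimeter, which likewise recovers the exact perimeter as the regularization parameter tends to zero. Names and parameter values are illustrative and unrelated to the paper's construction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def heat_kernel_perimeter(chi, h, t):
    """Heat-kernel regularization of the perimeter of the set {chi == 1}.

    Based on De Giorgi's characterization: sqrt(pi/t) times the mass that
    the heat semigroup e^{t*Laplacian} pushes out of the set converges to
    the exact perimeter as t -> 0.  chi is a 0/1 indicator on a uniform
    grid of spacing h.
    """
    sigma = np.sqrt(2.0 * t) / h              # heat-kernel std dev in pixels
    smoothed = gaussian_filter(chi, sigma)    # e^{t*Laplacian} applied to chi
    mass_outside = np.sum(smoothed * (1.0 - chi)) * h**2
    return np.sqrt(np.pi / t) * mass_outside

# Example: the exact perimeter of a disc of radius 0.25 is 2*pi*0.25 ~ 1.571
n, h = 256, 1.0 / 256
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
chi = ((x - 0.5)**2 + (y - 0.5)**2 <= 0.25**2).astype(float)
for t in [1e-3, 4e-4, 1e-4]:
    print(t, heat_kernel_perimeter(chi, h, t))   # approaches 1.571 as t -> 0
```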


Distributed Gradient Optimization with Embodied Approximation

We present an informal description of a general approach for developing decentralized distributed gradient descent optimization algorithms for teams of embodied agents that need to rearrange their configuration over space and/or time, into some optimal and initially unknown configuration. Our approach relies on using embodiment and spatial embeddedness as a surrogate for computational resources...


Learning Supervised PageRank with Gradient-Based and Gradient-Free Optimization Methods

In this paper, we consider a non-convex loss-minimization problem of learning Supervised PageRank models, which can account for features of nodes and edges. We propose gradient-based and random gradient-free methods to solve this problem. Our algorithms are based on the concept of an inexact oracle and, unlike the state-of-the-art gradient-based method, we manage to provide theoretically the conve...
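The abstract does not detail the random gradient-free method; the sketch below shows the generic two-point scheme, in the style of Nesterov and Spokoiny, on which such methods are typically built: a finite-difference estimate along a random Gaussian direction acts as an inexact gradient oracle. The test problem and step sizes are illustrative and are not the paper's algorithm for Supervised PageRank.

```python
import numpy as np

def random_gradient_free_step(f, x, mu, step, rng):
    """One step of a two-point random gradient-free scheme: estimate the
    directional derivative of f along a random Gaussian direction by a
    finite difference and take a descent step with it, never touching the
    true gradient."""
    u = rng.standard_normal(x.shape)            # random search direction
    deriv = (f(x + mu * u) - f(x)) / mu         # finite-difference estimate
    return x - step * deriv * u

# Example: minimize a simple quadratic using only function evaluations
rng = np.random.default_rng(0)
A = np.diag([1.0, 5.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
x = np.array([3.0, -2.0, 1.0])
for _ in range(2000):
    x = random_gradient_free_step(f, x, mu=1e-6, step=0.02, rng=rng)
print(f(x))                                     # close to the minimum value 0
```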


Discretization-free Knowledge Gradient Methods for Bayesian Optimization

This paper studies Bayesian ranking and selection (R&S) problems with correlated prior beliefs and continuous domains, i.e. Bayesian optimization (BO). Knowledge gradient methods [Frazier et al., 2008, 2009], which sample the one-step Bayes-optimal point, have been widely studied for discrete R&S problems. When used over continuous domains, previous work on the knowledge gradient [Scott et al., ...
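The truncated abstract contrasts the proposed discretization-free approach with earlier knowledge-gradient implementations that operate on a fixed grid of candidate points. The sketch below is a minimal Monte Carlo version of that discretized baseline (not the paper's method), assuming a zero-mean Gaussian-process prior with a unit-variance RBF kernel; all names and hyperparameters are illustrative.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Unit-variance squared-exponential kernel between 1-d point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def knowledge_gradient(x_train, y_train, candidates, noise=1e-4, n_mc=2000, seed=0):
    """Monte Carlo knowledge gradient on a fixed grid of candidate points.

    KG(x) is the expected increase of the maximum posterior mean after one
    hypothetical noisy observation at x, estimated by sampling fantasy
    observations from the predictive distribution.
    """
    rng = np.random.default_rng(seed)
    K_inv = np.linalg.inv(rbf(x_train, x_train) + noise * np.eye(len(x_train)))
    k_c = rbf(candidates, x_train)                           # cross-covariances
    mu = k_c @ K_inv @ y_train                               # posterior mean on the grid
    cov = rbf(candidates, candidates) - k_c @ K_inv @ k_c.T  # posterior covariance
    best_now = mu.max()

    kg = np.empty(len(candidates))
    for i in range(len(candidates)):
        s = cov[i, i] + noise                                # predictive variance at candidate i
        y_sim = mu[i] + np.sqrt(s) * rng.standard_normal(n_mc)
        # rank-one (Gaussian conditioning) update of the posterior mean
        mu_new = mu[None, :] + np.outer(y_sim - mu[i], cov[i]) / s
        kg[i] = mu_new.max(axis=1).mean() - best_now
    return kg

# Example: choose the next evaluation point for a 1-d function on [0, 1]
x_train = np.array([0.1, 0.5, 0.9])
y_train = np.sin(6 * x_train)
candidates = np.linspace(0, 1, 101)
kg = knowledge_gradient(x_train, y_train, candidates)
print(candidates[np.argmax(kg)])        # one-step Bayes-optimal point on the grid
```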



Journal

Journal title: Interfaces and Free Boundaries

Year: 2012

ISSN: 1463-9963

DOI: 10.4171/ifb/286